50 research outputs found

    Engineering shortest paths and layout algorithms for large graphs


    Von Java nach C++

    This text originated at the Wagner chair as part of the lab courses that graduate students are required to complete. In their undergraduate studies, students learn object-oriented programming using Java. However, it has turned out that Java is less well suited for algorithms and data structures than C++. We have therefore switched to teaching C++ in the first part of the lab course. In doing so, we restrict ourselves to those aspects of C++ that we consider necessary and useful for this application; in particular, the hardware-oriented parts of the language are reduced to a minimum. This is not to say that Java is a bad programming language, let alone entirely useless. Rather, we do not need many of the things that Java provides, while we do need other things that it does not provide. From our point of view, the relationship between Java, C, and C++ is as follows: C++ cannot deny its descent from C, if only because of its name. Most of C is contained in C++, but through its facilities for object-oriented and generic programming, C++ offers far more. In Java one (currently) finds only the tools for object orientation, and it departs strongly from C and C++, particularly in memory management. On the other hand, Java comes with a much more extensive standard library that enables, for example, GUI programming and networking. For algorithms and data structures, however, these features play a minor role, in contrast to memory management, which in Java is taken entirely out of the programmer's hands. Beyond that, there are many further details that allow the programmer to write more efficient code. All in all, the use of C++ therefore seems sensible for our purposes.

    FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory

    The advent of Storage Class Memory (SCM) is driving a rethink of storage systems towards a single-level architecture where memory and storage are merged. In this context, several works have investigated how to design persistent trees in SCM as a fundamental building block for these novel systems. However, these trees are significantly slower than DRAM-based counterparts, since trees are latency-sensitive and SCM exhibits higher latencies than DRAM. In this paper we propose a novel hybrid SCM-DRAM persistent and concurrent B-Tree, named Fingerprinting Persistent Tree (FPTree), that achieves similar performance to DRAM-based counterparts. In this novel design, leaf nodes are persisted in SCM while inner nodes are placed in DRAM and rebuilt upon recovery. The FPTree uses Fingerprinting, a technique that limits the expected number of in-leaf probed keys to one. In addition, we propose a hybrid concurrency scheme for the FPTree that is partially based on Hardware Transactional Memory. We conduct a thorough performance evaluation and show that the FPTree outperforms state-of-the-art persistent trees with different SCM latencies by up to a factor of 8.2. Moreover, we show that the FPTree scales very well on a machine with 88 logical cores. Finally, we integrate the evaluated trees in memcached and a prototype database. We show that the FPTree incurs an almost negligible performance overhead over using fully transient data structures, while significantly outperforming other persistent trees.
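    The fingerprinting idea described above can be sketched as a toy model: each leaf keeps a compact array of one-byte hashes ("fingerprints") that is scanned before any full key is compared. This is a minimal illustration, not the FPTree implementation; the concrete hash function and the unsorted-leaf layout here are assumptions.

    ```python
    import hashlib

    def fingerprint(key: int) -> int:
        # One-byte hash per key (the concrete hash function is an assumption here).
        return hashlib.blake2b(key.to_bytes(8, "little"), digest_size=1).digest()[0]

    class Leaf:
        """Unsorted leaf that scans a compact fingerprint array before touching full keys."""
        def __init__(self):
            self.fps = []    # one byte per occupied slot
            self.keys = []
            self.vals = []

        def insert(self, key, value):
            self.fps.append(fingerprint(key))
            self.keys.append(key)
            self.vals.append(value)

        def lookup(self, key):
            fp = fingerprint(key)
            for i, f in enumerate(self.fps):
                # Probe the full key only on a fingerprint match, so the expected
                # number of in-leaf key probes stays close to one.
                if f == fp and self.keys[i] == key:
                    return self.vals[i]
            return None
    ```

    Because a fingerprint mismatch rules out a slot with a single byte comparison, most slots are skipped without ever dereferencing the (SCM-resident) key, which is the effect the paper's technique exploits.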

    Integer Compression in NVRAM-centric Data Stores: Comparative Experimental Analysis to DRAM

    Lightweight integer compression algorithms play an important role in in-memory database systems in tackling the growing gap between processor speed and main memory bandwidth. Thus, there is a large number of algorithms to choose from, with different algorithms tailored to different data characteristics. As we show in this paper, the availability of byte-addressable non-volatile random-access memory (NVRAM), a novel type of main memory with specific characteristics, further increases the overall complexity in this domain. In particular, we provide a detailed evaluation of state-of-the-art lightweight integer compression schemes and database operations on NVRAM and compare them with DRAM. Furthermore, we reason about possible deployments of middle- and heavyweight approaches for better adaptation to NVRAM characteristics. Finally, we investigate a combined approach where both volatile and non-volatile memories are used in a cooperative fashion, as is likely to be the case for hybrid and NVRAM-centric database systems.
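    As one concrete instance of the lightweight algorithms the abstract refers to, variable-byte coding packs each integer into as few bytes as possible. The sketch below is illustrative only; the paper evaluates a range of schemes, and this particular bit layout (high bit marking a value's last byte) is an assumption chosen for brevity.

    ```python
    def vbyte_encode(values):
        """Variable-byte encoding: 7 payload bits per byte, high bit marks a value's last byte."""
        out = bytearray()
        for v in values:
            while v >= 128:
                out.append(v & 0x7F)
                v >>= 7
            out.append(v | 0x80)
        return bytes(out)

    def vbyte_decode(data):
        values, v, shift = [], 0, 0
        for b in data:
            if b & 0x80:                          # terminating byte of a value
                values.append(v | ((b & 0x7F) << shift))
                v, shift = 0, 0
            else:
                v |= b << shift
                shift += 7
        return values
    ```

    Schemes like this trade a little decoding work for less data moved over the memory bus, which is why their relative merit shifts when the underlying memory (DRAM vs. NVRAM) has different bandwidth and latency characteristics.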

    CXL Memory as Persistent Memory for Disaggregated HPC: A Practical Approach

    In the landscape of High-Performance Computing (HPC), the quest for efficient and scalable memory solutions remains paramount. The advent of Compute Express Link (CXL) introduces a promising avenue with its potential to function as a Persistent Memory (PMem) solution in the context of disaggregated HPC systems. This paper presents a comprehensive exploration of CXL memory's viability as a candidate for PMem, supported by physical experiments conducted on cutting-edge multi-NUMA nodes equipped with CXL-attached memory prototypes. Our study not only benchmarks the performance of CXL memory but also illustrates the seamless transition from traditional PMem programming models to CXL, reinforcing its practicality. To substantiate our claims, we establish a tangible CXL prototype using an FPGA card embodying CXL 1.1/2.0 compliant endpoint designs (Intel FPGA CXL IP). Performance evaluations, executed through the STREAM and STREAM-PMem benchmarks, showcase CXL memory's ability to mirror PMem characteristics in App-Direct and Memory Mode while achieving impressive bandwidth metrics with Intel 4th generation Xeon (Sapphire Rapids) processors. The results elucidate the feasibility of CXL memory as a persistent memory solution, outperforming previously established benchmarks. In contrast to published DCPMM results, our CXL-DDR4 memory module offers comparable bandwidth to local DDR4 memory configurations, albeit with a moderate decrease in performance. The modified STREAM-PMem application demonstrates the ease of transitioning programming models from PMem to CXL, underscoring the practicality of adopting CXL memory.

    Memory management techniques for large-scale persistent-main-memory systems

    Storage Class Memory (SCM) is a novel class of memory technologies that promise to revolutionize database architectures. SCM is byte-addressable and exhibits latencies similar to those of DRAM, while being non-volatile. Hence, SCM could replace both main memory and storage, enabling a novel single-level database architecture without the traditional I/O bottleneck. Fail-safe persistent SCM allocation can be considered conditio sine qua non for enabling this novel architecture paradigm for database management systems. In this paper we present PAllocator, a fail-safe persistent SCM allocator whose design emphasizes high concurrency and capacity scalability. Contrary to previous works, PAllocator thoroughly addresses the important challenge of persistent memory fragmentation by implementing an efficient defragmentation algorithm. We show that PAllocator outperforms state-of-the-art persistent allocators by up to one order of magnitude, both in operation throughput and recovery time, and enables up to 2.39x higher operation throughput on a persistent B-Tree.

    SOFORT: A Hybrid SCM-DRAM Storage Engine for Fast Data Recovery

    Storage Class Memory (SCM) has the potential to significantly improve database performance. This potential has been well documented for throughput [4] and response time [25, 22]. In this paper we show that SCM also has the potential to significantly improve restart performance, a shortcoming of traditional main memory database systems. We present SOFORT, a hybrid SCM-DRAM storage engine that leverages the full capabilities of SCM by doing away with a traditional log and updating the persisted data in place in small increments. We show that we can achieve restart times of a few seconds, independent of instance size and transaction volume, without significantly impacting transaction throughput.
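    The log-free, in-place update idea relies on ordering persistence carefully: the payload must be durable before anything marks it as valid. The toy model below illustrates that ordering; SimulatedSCM, the per-field flush granularity, and the slot naming are assumptions for illustration, not SOFORT's actual design.

    ```python
    class SimulatedSCM:
        """Toy SCM: writes land in a volatile buffer and 'persist' only on explicit flush."""
        def __init__(self):
            self.volatile = {}
            self.persisted = {}

        def write(self, field, value):
            self.volatile[field] = value

        def flush(self, field):
            # Models a cache-line flush plus fence (e.g. CLWB + SFENCE).
            self.persisted[field] = self.volatile[field]

    def insert_log_free(scm, slot, key, value):
        # Persist the payload first and flip the validity flag last, so a crash
        # can never expose a slot that is marked valid but holds a torn payload.
        scm.write(f"{slot}.key", key)
        scm.flush(f"{slot}.key")
        scm.write(f"{slot}.value", value)
        scm.flush(f"{slot}.value")
        scm.write(f"{slot}.valid", True)
        scm.flush(f"{slot}.valid")
    ```

    Recovery then only needs to scan for valid slots rather than replay a log, which is what makes restart time independent of transaction volume.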

    Methoden der Diskreten Mathematik bei der Rekonstruktion der Topologie eines CAD-Datenmodells

    Computer-aided design plays an important role in today's engineering. In this thesis, we deal with CAD data models. Each model consists of mesh elements and approximates the surface of a workpiece. In general, the mesh elements are not parts of a plane, and their edges are not straight lines. More precisely, trimmed parametric surface patches were used in the data available to us. One of the tasks that is to be done automatically is the reconstruction of the so-called topology of the CAD data model, i.e. the information whether and where two mesh elements are to be regarded as immediately neighboured. Many widespread data formats for CAD models do not provide the neighbourhoods. The topology of a CAD model is important, since almost every further step of the CAD process relies on this information.

    Why CAD Data Repair Requires Discrete Techniques

    We consider a problem of reconstructing a discrete structure from unstructured numerical data. It arises in the computer-aided design of machines, motor vehicles, and other technical devices. A CAD model consists of a set of surface pieces in the three-dimensional space (the so-called mesh elements). The neighbourhoods of these mesh elements, the topology of the model, must be reconstructed. The reconstruction is non-trivial because of erroneous gaps between neighboured mesh elements. However, a look at the real-world data from various applications strongly suggests that the pairs of neighboured mesh elements may be (nearly) correctly identified by some distance measure and some threshold. In fact, to our knowledge, this is the main strategy pursued in practice. In this paper, we make a first attempt to design systematic studies to support a negative claim: We demonstrate empirically that human intuition is misleading here, and that this approach fails miserably even for "in..
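    The threshold heuristic the abstract argues against can be sketched as follows. This is a naive illustration under simplifying assumptions: real mesh-element edges are curves (trimmed parametric patches), whereas here edges are straight segments and the gap measure is a simple endpoint-based distance.

    ```python
    import math

    def edge_distance(e1, e2):
        """Gap between two straight edges: the smaller of the two endpoint pairings.
        A simplification; real mesh-element edges are curves, not segments."""
        (a1, b1), (a2, b2) = e1, e2
        direct  = max(math.dist(a1, a2), math.dist(b1, b2))
        flipped = max(math.dist(a1, b2), math.dist(b1, a2))
        return min(direct, flipped)

    def neighbours_by_threshold(edges, eps):
        # Declare two edges neighboured whenever their gap is below the threshold.
        pairs = []
        for i in range(len(edges)):
            for j in range(i + 1, len(edges)):
                if edge_distance(edges[i], edges[j]) <= eps:
                    pairs.append((i, j))
        return pairs
    ```

    The paper's point is precisely that no single eps works on real data: erroneous gaps between true neighbours can exceed the distance between genuinely distinct edges, so any fixed threshold misclassifies some pairs.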